Community mining algorithm based on multi-relationship of nodes and its application
Lin ZHOU, Yuzhi XIAO, Peng LIU, Youpeng QIN
Journal of Computer Applications    2023, 43 (5): 1489-1496.   DOI: 10.11772/j.issn.1001-9081.2022081218

In order to measure the similarity of multi-relational nodes and mine community structures containing such nodes, a community mining algorithm based on the multi-relationship of nodes, called LSL-GN, was proposed. Firstly, based on node similarity and node reachability, LHN-ISL, a similarity measurement index for multi-relational nodes, was defined and used to reconstruct a low-density model of the target network, and the community division was completed by combining it with the GN (Girvan-Newman) algorithm. The LSL-GN algorithm was compared with several classical community mining algorithms on Modularity (Q value), Normalized Mutual Information (NMI) and Adjusted Rand Index (ARI). The results show that the LSL-GN algorithm achieves the best results on all three indexes, indicating that its community division quality is better. The "User-Application" mobile roaming network model was divided by the LSL-GN algorithm into community structures based on basic applications such as Ctrip, Amap and Didi Travel. These community division results can provide strategic reference information for designing personalized package services.
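To make the workflow concrete, below is a minimal Python sketch of the reconstruct-then-divide pipeline using networkx: a plain Jaccard similarity stands in for the LHN-ISL index (whose actual definition is given in the paper), edges below a similarity threshold are pruned to form the low-density model, and the GN partition with the highest Q value is kept.

```python
# Hedged sketch: the GN stage of an LSL-GN-style pipeline. A plain Jaccard similarity
# stands in for LHN-ISL purely to illustrate the "reconstruct, then divide with GN" workflow.
import itertools
import networkx as nx
from networkx.algorithms.community import girvan_newman, modularity

def reconstruct(graph, threshold=0.05):
    """Keep only edges whose (placeholder) node-pair similarity exceeds a threshold."""
    g = nx.Graph()
    g.add_nodes_from(graph.nodes())
    for u, v in graph.edges():
        nu, nv = set(graph[u]), set(graph[v])
        sim = len(nu & nv) / len(nu | nv)          # stand-in for the LHN-ISL index
        if sim >= threshold:
            g.add_edge(u, v)
    return g

graph = nx.karate_club_graph()                     # toy single-relation example
low_density = reconstruct(graph)
# Keep the GN partition with the highest modularity (Q value).
best = max(
    (tuple(c) for c in itertools.islice(girvan_newman(low_density), 10)),
    key=lambda part: modularity(low_density, part),
)
print(len(best), "communities, Q =", round(modularity(low_density, best), 3))
```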

Key node mining in complex network based on improved local structural entropy
Peng LI, Shilin WANG, Guangwu CHEN, Guanghui YAN
Journal of Computer Applications    2023, 43 (4): 1109-1114.   DOI: 10.11772/j.issn.1001-9081.2022040562

The identification of key nodes in complex networks plays an important role in optimizing network structure and propagating information effectively. Local structural Entropy (LE) can be used to identify key nodes by using the influence of the local network on the whole network instead of the influence of individual nodes on the whole network. However, LE does not consider highly aggregative networks or nodes that form loops with their neighbors, which leads to some limitations. To address these limitations, firstly, an improved LE based node importance evaluation method, namely PLE (Penalized Local structural Entropy), was proposed, in which, on the basis of LE, the Clustering Coefficient (CC) was introduced as a penalty term to appropriately penalize the highly aggregative nodes in the network. Secondly, because PLE penalizes nodes in triadic closure structures too heavily, an improved version of PLE, namely PLEA (Penalized Local structural Entropy Advancement), was proposed, in which a control coefficient was introduced in front of the penalty term to control the penalty strength. Selective attack experiments on five real networks of different sizes were conducted. Experimental results show that, on the Western US power grid and the US Airlines network, PLEA improves identification accuracy by 26.3% and 3.2% compared with LE, by 380% and 5.43% compared with the K-Shell (KS) method, and by 14.4% and 24% compared with the DCL (Degree and Clustering coefficient and Location) method, respectively. The key nodes identified by PLEA cause more damage to the network, verifying the rationality of introducing CC as a penalty term as well as the effectiveness and superiority of PLEA. By integrating the number of neighbors and the local network structure of nodes while remaining simple to compute, PLEA is more effective for describing the reliability and invulnerability of large-scale networks.
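The following is a hedged sketch of a PLEA-style score: the local structural entropy is taken here as the Shannon entropy of the ego network's degree distribution, damped by the clustering coefficient through a control coefficient alpha. The exact entropy and penalty formulas used by PLEA are defined in the paper and may differ from this toy form.

```python
# Hedged sketch of a PLEA-style key-node score; the entropy and penalty definitions
# below are plausible stand-ins, not the paper's exact formulas.
import math
import networkx as nx

def plea_scores(graph, alpha=0.5):
    clustering = nx.clustering(graph)
    scores = {}
    for node in graph:
        ego = nx.ego_graph(graph, node)
        total = sum(d for _, d in ego.degree())
        if total == 0:
            scores[node] = 0.0
            continue
        # Shannon entropy of the degree distribution inside the ego network.
        probs = [d / total for _, d in ego.degree() if d > 0]
        entropy = -sum(p * math.log(p) for p in probs)
        # Penalise highly clustered (triadic-closure heavy) neighbourhoods.
        scores[node] = entropy * (1.0 - alpha * clustering[node])
    return scores

g = nx.karate_club_graph()
ranking = sorted(plea_scores(g).items(), key=lambda kv: kv[1], reverse=True)
print("top-5 key nodes:", [n for n, _ in ranking[:5]])
```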

Multiple clustering algorithm based on dynamic weighted tensor distance
Zhuangzhuang XUE, Peng LI, Weibei FAN, Hongjun ZHANG, Fanshuo MENG
Journal of Computer Applications    2023, 43 (11): 3449-3456.   DOI: 10.11772/j.issn.1001-9081.2022101626

When measuring the importance of attributes in the Tensor-based Multiple Clustering algorithm (TMC), the relevance of attribute combinations within object tensors is ignored, and the selected and unselected feature spaces are not fully separated because a fixed weighting strategy is used under different feature space selections. To address these problems, a Multiple Clustering algorithm based on Dynamic Weighted Tensor Distance (DWTD-MC) was proposed. Firstly, a self-association tensor model was constructed to improve the accuracy of attribute importance measurement in each feature space. Then, a multi-view weight tensor model was built to meet the task requirements of multiple clustering analysis through a dynamic weighting strategy under different feature space selections. Finally, the dynamic weighted tensor distance was used to measure the similarity of data points and generate multiple clustering results. Simulation results on real datasets show that DWTD-MC outperforms comparison algorithms such as TMC in terms of Jaccard Index (JI), Dunn Index (DI), Davies-Bouldin index (DB) and Silhouette Coefficient (SC). It obtains high-quality clustering results while maintaining low redundancy among them, meeting the task requirements of multiple clustering analysis.

Data management method for building internet of things based on Hashgraph
Xu WANG, Yumin SHEN, Xiaoyun XIONG, Peng LI, Jinlong WANG
Journal of Computer Applications    2022, 42 (8): 2471-2480.   DOI: 10.11772/j.issn.1001-9081.2021060958

A Hashgraph-based data management method for the building Internet of Things (IoT) was proposed to address the severe lack of throughput and the high response delay encountered when applying blockchain to building IoT scenarios. In this method, a Directed Acyclic Graph (DAG) was used for data storage to increase the throughput of the blockchain thanks to the high concurrency of this structure; the Hashgraph algorithm was applied to reach consensus on the data stored in the DAG to reduce the time consumed by consensus; and smart contracts were designed to realize access control and prevent unauthorized users from operating on the data. Caliper, a blockchain performance testing tool, was adopted for the performance tests. The results show that in a medium-scale simulation environment with 32 nodes, the throughput of the proposed method is 1 063.1 transactions per second, which is 6 times and 3 times that of the edge computing and cross-chain methods respectively; the data storage delay and control delay of the proposed method are 4.57 seconds and 4.92 seconds respectively, indicating that its response speed is better than that of the comparison methods; and the transaction success rate of the method reaches 87.4% in spike testing. At the same time, the prototype system based on this method ran stably for 120 hours in stability testing. These results illustrate that the proposed method can effectively improve the throughput and response speed of the blockchain and meets the actual needs of building IoT scenarios.

Smart contract-based access control architecture and verification for internet of things
Yang LI, Long XU, Yanqiang LI, Shaopeng LI
Journal of Computer Applications    2022, 42 (6): 1922-1931.   DOI: 10.11772/j.issn.1001-9081.2021040553

Concerning the problems that traditional access control methods suffer from single points of failure and fail to provide trusted, secure and dynamic access management, and that existing blockchain-based access control methods lack access dynamics and intelligence, a new access control model based on blockchain and smart contracts for Wireless Sensor Networks (WSN) was proposed. Firstly, a new blockchain-based access control architecture was proposed to reduce the network computing overhead. Secondly, a multi-level smart contract system consisting of an Agent Contract (AC), an Authority Management Contract (AMC) and an Access Control Contract (ACC) was built, thereby realizing trusted and dynamic access management for WSN. Finally, a dynamic access generation algorithm based on a Radial Basis Function (RBF) neural network was adopted and combined with the access policy to generate the credit score threshold of access nodes, realizing intelligent and dynamic access control management for the large number of sensors in WSN. Experimental results verify the availability, security and effectiveness of the proposed model in WSN secure access control applications.

Time-frequency domain CT reconstruction algorithm based on convolutional neural network
Kunpeng LI, Pengcheng ZHANG, Hong SHANGGUAN, Yanling WANG, Jie YANG, Zhiguo GUI
Journal of Computer Applications    2022, 42 (4): 1308-1316.   DOI: 10.11772/j.issn.1001-9081.2021050876

Concerning the problems of artifacts and loss of image detail in images reconstructed analytically with time-domain filters, a new time-frequency domain Computed Tomography (CT) reconstruction algorithm based on a Convolutional Neural Network (CNN) was proposed. Firstly, a filter network based on a convolutional neural network was constructed in the frequency domain to filter the projection data. Secondly, the back-projection operator was used to convert the frequency-domain filtered result into a reconstructed image, and a network was constructed in the image domain to process the image from the back-projection layer. Finally, a multi-scale structural similarity loss function was introduced on the basis of the minimum mean square error loss function to form a composite loss function, which reduced the blurring effect of the neural network on the resulting image and preserved the details of the reconstructed image. The image domain network and the projection domain filter network worked together to produce the final reconstruction. The effectiveness of the proposed algorithm was verified on a clinical dataset. Compared with the Filtered Back Projection (FBP) algorithm, the Total Variation (TV) algorithm and the image domain Residual Encoder-Decoder CNN (RED-CNN) algorithm, when the number of projections is 180 or 90, the proposed algorithm achieves the reconstructed image with the highest Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) and the lowest Normalized Mean Square Error (NMSE); when the number of projections is 360, the proposed algorithm is second only to the TV algorithm. The experimental results show that the proposed algorithm can improve the quality of reconstructed CT images, and that it is feasible and effective.

Event-driven dynamic collection method for microservice invocation link data
Peng LI, Zhuofeng ZHAO, Han LI
Journal of Computer Applications    2022, 42 (11): 3493-3499.   DOI: 10.11772/j.issn.1001-9081.2021101735

Microservice invocation link data is an important type of data generated in the daily operation of a microservice application system; it records, in the form of a link, the series of service invocations corresponding to a user request in the microservice application. Because of the distributed nature of such systems, microservice invocation link data are generated at different microservice deployment nodes, and the current methods for collecting these distributed data are full collection and sampling collection. Full collection may bring large data transmission and storage costs, while sampling collection may miss critical invocation data. Therefore, an event-driven, pipeline-sampling based dynamic collection method for microservice invocation link data was proposed, and a microservice invocation link system supporting dynamic collection of invocation link data was designed and implemented based on the open-source software Zipkin. Firstly, pipeline sampling was performed on the link data of different nodes that met the predefined event features; that is, the same link data of all nodes were collected by the data collection server only when event-defined data were generated by some node. Meanwhile, to address the problem of inconsistent data generation rates across nodes, multi-threaded streaming data processing based on time windows and data synchronization technology were used to realize data collection and transmission for the different nodes. Finally, considering that the link data of individual nodes arrive at the server in different orders, the synchronization and summary of the full link data were realized through a timing alignment method. Experimental results on a public microservice invocation link dataset show that, compared with the full collection and sampling collection methods, the proposed method collects link data containing specific events such as anomalies and slow responses more accurately and efficiently.
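A minimal sketch of the event-driven, time-windowed collection idea is given below; the span fields, the slow-response threshold and the flush policy are illustrative assumptions rather than Zipkin's data model or the paper's exact rules.

```python
# Hedged sketch of event-driven, time-windowed collection of invocation link (trace)
# data; field names and thresholds are assumptions for illustration only.
import time
from collections import defaultdict

SLOW_MS = 500          # hypothetical event feature: slow response
WINDOW_SECONDS = 5.0   # alignment window for spans arriving from different nodes

class EventDrivenCollector:
    def __init__(self):
        self.buffer = defaultdict(list)   # trace_id -> spans reported by all nodes
        self.flagged = set()              # traces in which some node raised an event
        self.window_start = time.time()

    def on_span(self, span):
        self.buffer[span["trace_id"]].append(span)
        if span.get("error") or span["duration_ms"] > SLOW_MS:
            self.flagged.add(span["trace_id"])
        if time.time() - self.window_start >= WINDOW_SECONDS:
            self.flush()

    def flush(self):
        # Keep only the full links that matched an event feature; drop the rest.
        collected = {t: sorted(self.buffer[t], key=lambda s: s["timestamp"])
                     for t in self.flagged}
        self.buffer.clear()
        self.flagged.clear()
        self.window_start = time.time()
        return collected

collector = EventDrivenCollector()
collector.on_span({"trace_id": "t1", "timestamp": 1, "duration_ms": 800, "error": False})
collector.on_span({"trace_id": "t2", "timestamp": 2, "duration_ms": 30, "error": False})
print(list(collector.flush().keys()))   # only the slow trace t1 is collected
```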

Application of Stacking-Bagging-Vote multi-source information fusion model for financial early warning
Lu ZHANG, Jiapeng LIU, Dongmei TIAN
Journal of Computer Applications    2022, 42 (1): 280-286.   DOI: 10.11772/j.issn.1001-9081.2021020306

Ensemble resampling technology can, to some extent, solve the problem of imbalanced samples in financial early warning research. Different ensemble models suit different ensemble resampling technologies: it was found in this study that Up-Down ensemble sampling and Tomek-Smote ensemble sampling were suitable for the Bagging-Vote ensemble model and the Stacking fusion model respectively. On this basis, a Stacking-Bagging-Vote (SBV) multi-source information fusion model was built. Firstly, the Bagging-Vote model based on Up-Down ensemble sampling and the Stacking model based on Tomek-Smote sampling were fused. Then, stock trading data were added and processed by Kalman filtering, so that interactive fusion optimization at the data level and the model level was realized, yielding the final SBV multi-source information fusion model. This fusion model not only greatly improves prediction performance by taking prediction accuracy and prediction precision into account simultaneously, but also allows the corresponding SBV multi-source information fusion model to be selected for financial early warning by adjusting the model parameters, so as to meet the actual needs of different stakeholders.
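As a rough illustration of the model-level fusion, the scikit-learn sketch below trains a Stacking model and a Bagging-plus-Voting model separately and averages their predicted probabilities; the paper's Up-Down and Tomek-Smote resampling pipelines and the Kalman-filtered trading features are omitted, and the probability-averaging rule is an assumption.

```python
# Hedged sketch of an SBV-style fusion with scikit-learn on synthetic imbalanced data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stacking = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(max_depth=4)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000))
bagging_vote = VotingClassifier(
    estimators=[("bag", BaggingClassifier(n_estimators=50, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")

stacking.fit(X_tr, y_tr)
bagging_vote.fit(X_tr, y_tr)
# Model-level fusion: average the two probability outputs (illustrative rule only).
fused = (stacking.predict_proba(X_te) + bagging_vote.predict_proba(X_te)) / 2
print("fused accuracy:", np.mean(fused.argmax(axis=1) == y_te))
```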

Research progress on driver distracted driving detection
QIN Binbin, PENG Liangkang, LU Xiangming, QIAN Jiangbo
Journal of Computer Applications    2021, 41 (8): 2330-2337.   DOI: 10.11772/j.issn.1001-9081.2020101691
With the rapid development of the vehicle industry and the world economy, the number of private cars continues to increase, which results in more and more traffic accidents, and traffic safety has become a global hotspot. Research on driver distracted driving detection is mainly divided into two types: traditional Computer Vision (CV) algorithms and deep learning algorithms. In driver distraction detection based on traditional CV algorithms, image features are extracted by feature operators such as Scale-Invariant Feature Transform (SIFT) and Histogram of Oriented Gradient (HOG), and then a Support Vector Machine (SVM) is used to build a model and classify the images. However, traditional CV algorithms have the disadvantages of high requirements on the environment, a narrow application range, large numbers of parameters and high computational complexity. In recent years, deep learning has shown excellent performance, such as fast speed and high precision, in extracting data features, so researchers began to introduce deep learning into driver distracted driving detection. Methods based on deep learning can realize end-to-end distracted driving detection networks with high accuracy. The research status of traditional CV algorithms and deep learning algorithms in driver distracted driving detection was reviewed. Firstly, the use of traditional CV algorithms in the image field and in research on driver distracted driving detection was elaborated. Secondly, research on driver distracted driving detection based on deep learning was introduced. Thirdly, the accuracies and model parameters of different driver distracted driving detection methods were compared and analyzed. Finally, the existing research was summarized, and three problems that driver distracted driving detection needs to solve in the future were put forward: the division standards for the driver's distraction state and distraction degree need to be further improved; the three aspects of person-vehicle-road need to be considered comprehensively; and neural network parameters need to be reduced more effectively.
Imputation algorithm for hybrid information system of incomplete data analysis approach based on rough set theory
PENG Li, ZHANG Haiqing, LI Daiwei, TANG Dan, YU Xi, HE Lei
Journal of Computer Applications    2021, 41 (3): 677-685.   DOI: 10.11772/j.issn.1001-9081.2020060894
Concerning the poor imputation capability of the ROUgh Set Theory based Incomplete Data Analysis Approach (ROUSTIDA) for Hybrid Information Systems (HIS) containing multiple attribute types in real-world applications, such as discrete (e.g., integer, string and enumeration), continuous (e.g., floating point) and missing attributes, a Rough Set Theory based Hybrid Information System Missing Data Imputation Approach (RSHISMIS) was proposed. Firstly, according to the idea of decision attribute equivalence class partition, the HIS was divided to solve the decision rule conflict problem that might occur after imputation. Secondly, a hybrid distance matrix was defined to reasonably quantify the similarity between objects, in order to filter the samples with imputation capability and to overcome the shortcoming that ROUSTIDA cannot handle continuous attributes. Thirdly, the nearest-neighbor idea was incorporated to solve the problem that ROUSTIDA cannot impute data with the same missing attribute when the attribute values of non-discriminant objects conflict. Finally, experiments were conducted on 10 UCI datasets, and the proposed method was compared with classical algorithms including ROUSTIDA, K Nearest Neighbor Imputation (KNNI), Random Forest Imputation (RFI) and Matrix Factorization (MF). Experimental results show that the proposed method outperforms ROUSTIDA by 81% in recall on average and by 5% to 53% in precision, and reduces the Normalized Root Mean Square Error (NRMSE) by up to 0.12 compared with ROUSTIDA. Besides, the classification accuracy of the method is 7% higher on average than that of ROUSTIDA, and is also better than those of the imputation algorithms KNNI, RFI and MF.
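A small sketch of one way such a hybrid distance can be quantified is shown below: discrete attributes contribute a 0/1 mismatch, continuous attributes a range-normalised difference, and missing values (None) are skipped; the paper's hybrid distance matrix is defined more precisely than this stand-in.

```python
# Hedged sketch of a hybrid distance between two objects with mixed discrete and
# continuous attributes and missing values; the exact formula in the paper may differ.
def hybrid_distance(x, y, is_continuous, ranges):
    total, used = 0.0, 0
    for i, (a, b) in enumerate(zip(x, y)):
        if a is None or b is None:            # missing value: contributes no evidence
            continue
        if is_continuous[i]:
            total += abs(a - b) / ranges[i]   # range-normalised continuous difference
        else:
            total += 0.0 if a == b else 1.0   # discrete mismatch
        used += 1
    return total / used if used else float("inf")

rec1 = [3, "manager", 52.0, None]
rec2 = [3, "clerk", 40.0, 7.5]
print(hybrid_distance(rec1, rec2, is_continuous=[False, False, True, True],
                      ranges=[1, 1, 60.0, 10.0]))
```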
Early diagnosis and prediction of Parkinson's disease based on clustering medical text data
ZHANG Xiaobo, YANG Yan, LI Tianrui, LU Fan, PENG Lilan
Journal of Computer Applications    2020, 40 (10): 3088-3094.   DOI: 10.11772/j.issn.1001-9081.2020030359
In view of the problem of early intelligent diagnosis of Parkinson's Disease (PD), which occurs more commonly in the elderly, clustering technologies based on medical detection text data were proposed for the analysis and prediction of PD. Firstly, the original dataset was pre-processed to obtain effective feature information, and these features were reduced by Principal Component Analysis (PCA) to eight feature spaces of different dimensions. Then, five classical clustering models and three different clustering ensemble methods were used to cluster the data in the eight feature spaces. Finally, four clustering performance indexes were selected to predict PD subjects with dopamine deficiency, healthy controls, and Scans Without Evidence of Dopamine Deficiency (SWEDD) PD subjects. The simulation results show that the clustering accuracy of the Gaussian Mixture Model (GMM) reaches 89.12% when the PCA feature dimension is 30, the clustering accuracy of Spectral Clustering (SC) is 61.41% when the PCA feature dimension is 70, and the clustering accuracy of the Meta-CLustering Algorithm (MCLA) reaches 59.62% when the PCA feature dimension is 80. The comparative experimental results show that GMM has the best clustering effect among the five classical clustering methods when the PCA feature dimension is less than 40, and MCLA shows excellent clustering performance among the three clustering ensemble methods across different feature dimensions, thereby providing technical and theoretical support for the early intelligent auxiliary diagnosis of PD.
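The PCA-plus-GMM step that performed best can be sketched as follows with scikit-learn, using synthetic data standing in for the PD dataset; the component counts and preprocessing are illustrative assumptions only.

```python
# Hedged sketch: dimensionality reduction with PCA followed by GMM clustering,
# with the clusters compared against reference labels. Synthetic data only.
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.metrics import adjusted_rand_score
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

X, y_true = make_blobs(n_samples=300, n_features=60, centers=3, random_state=0)
X_reduced = PCA(n_components=30).fit_transform(StandardScaler().fit_transform(X))
labels = GaussianMixture(n_components=3, random_state=0).fit_predict(X_reduced)
print("ARI vs. reference labels:", round(adjusted_rand_score(y_true, labels), 3))
```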
Application of improved A* algorithm in indoor path planning for mobile robot
CHEN Ruonan, WEN Congcong, PENG Ling, YOU Chengzeng
Journal of Computer Applications    2019, 39 (4): 1006-1011.   DOI: 10.11772/j.issn.1001-9081.2018091977
For indoor path planning of mobile robots in scenarios with multiple U-shaped obstacles, the traditional A* algorithm has problems such as ignoring the actual size of the robot and long computation time. An improved A* algorithm was proposed to solve these problems. Firstly, a neighborhood matrix was introduced to perform obstacle search, improving path safety. Then, the effects of different types and sizes of neighborhood matrices on the performance of the algorithm were studied and summarized. Finally, the heuristic function was improved by combining angle information and distance information (calculated with different expressions as the situation changes) to improve calculation efficiency. The experimental results show that the proposed algorithm can obtain different safety spacings by changing the size of the obstacle search matrix, ensuring the safety of different types of robots in different environments. Moreover, in complex environments, compared with the traditional A* algorithm, path planning speed is improved by 28.07% and the search range is narrowed by 66.55%, which improves the responsiveness of the robot's re-planning when encountering dynamic obstacles.
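The two main ideas, inflating obstacles with a neighborhood matrix so the path respects the robot's footprint and then running A* on the inflated grid, can be sketched as below; the paper's combined angle/distance heuristic is replaced here by a plain Euclidean heuristic.

```python
# Hedged sketch: obstacle inflation via a neighbourhood matrix, then grid A*.
import heapq
import itertools
import math

def inflate(grid, radius=1):
    """Mark any cell within `radius` (Chebyshev distance) of an obstacle as blocked."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                for rr in range(max(0, r - radius), min(rows, r + radius + 1)):
                    for cc in range(max(0, c - radius), min(cols, c + radius + 1)):
                        out[rr][cc] = 1
    return out

def a_star(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: math.dist(p, goal)                # Euclidean heuristic
    tie = itertools.count()
    frontier = [(h(start), next(tie), 0.0, start, None)]
    best_g, parent_of = {start: 0.0}, {}
    while frontier:
        _, _, g, node, parent = heapq.heappop(frontier)
        if node in parent_of:
            continue
        parent_of[node] = parent
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent_of[node]
            return path[::-1]
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]:
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + math.hypot(dr, dc)
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), next(tie), ng, nxt, node))
    return None

grid = [[0] * 10 for _ in range(10)]
for r in range(2, 8):
    grid[r][5] = 1                                   # a wall-like obstacle
print(a_star(inflate(grid, radius=1), (0, 0), (9, 9)))
```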
Multi-channel scheduling strategy in smart distribution network
BAO Xingchuan, PENG Lin
Journal of Computer Applications    2018, 38 (5): 1476-1480.   DOI: 10.11772/j.issn.1001-9081.2017102444
In order to effectively improve the Quality of Service (QoS) of a wireless sensor network-based distribution network, and to further enhance real-time performance and reduce delay in the distribution network, a multi-channel scheduling strategy based on priority was proposed. First of all, a Link routing algorithm Based on Minimum Hop Spanning Tree (LB-MHST) was proposed to overcome radio frequency interference and ensure the service quality of the smart grid according to real-time channel state information. Then, according to the different delay requirements of different data packets in the distribution network, the priority of data transmission was considered, which effectively improved the data transmission efficiency of the sensing nodes and further satisfied the QoS requirements of the distribution network. The experimental results show that, compared with the Minimum Hop Spanning Tree (MHST) algorithm, the proposed algorithm improves real-time performance by 12, 15.2 and 18 percentage points in the cases of one channel, 8 channels and 16 channels, respectively.
Elastic scheduling strategy for cloud resource based on Docker
PENG Liping, LYU Xiaodan, JIANG Chaohui, PENG Chenghui
Journal of Computer Applications    2018, 38 (2): 557-562.   DOI: 10.11772/j.issn.1001-9081.2017081943
Considering the problem of elastic scheduling of cloud resources and the characteristics of Ceph data storage, a cloud resource elastic scheduling strategy based on Docker containers was proposed. First of all, it was pointed out that Docker container data volumes cannot work across different hosts, which makes online migration of applications difficult, so the data storage method of the Ceph cluster was improved. Furthermore, a resource scheduling optimization model based on the comprehensive load of nodes was established. Finally, by combining the characteristics of the Ceph cluster and Docker containers, Docker Swarm orchestration was used to achieve container deployment and online application migration that take both data storage and cluster load into account. The experimental results show that, compared with other scheduling strategies, the proposed strategy achieves elastic scheduling of cloud platform resources by partitioning the cluster resources at a finer granularity, makes reasonable use of cloud platform resources, and reduces the operating cost of the data center under the premise of ensuring application performance.
Ring-based clustering algorithm for nodes non-uniform deployment
SUN Chao, PENG Li, ZHU Xuefang
Journal of Computer Applications    2017, 37 (6): 1527-1531.   DOI: 10.11772/j.issn.1001-9081.2017.06.1527
Aiming at the energy hole problem in the ring-based, non-uniform node deployment network model of Wireless Sensor Networks (WSN), a Ring-based Clustering Algorithm for Nodes Non-uniform Deployment (RCANND) was proposed. The optimal number of cluster heads in each ring was calculated by minimizing the energy consumption of each ring in the non-uniform node deployment network model. Cluster head selectivity was calculated using the residual energy of the nodes, the distance to the base station, and the average distance to neighboring nodes. Cluster head rotation was carried out according to the cluster head selection sequence within each cluster, and the number of cluster formation phases was reduced to improve the network's energy efficiency. The proposed algorithm was tested in simulation experiments. The results show that the fluctuation of the average energy consumption of nodes under the same radius but different node deployment models is very small, and the fluctuation under the same node deployment model but different radii is also not obvious. The network lifetime was defined as the time for which at least 50% of the network nodes remain alive. In the case of non-uniform node deployment, the network lifetime of the proposed algorithm is about 18.1% longer than that of the Unequal Hybrid Energy Efficient Distributed algorithm (UHEED) and about 11.5% longer than that of the Rotated Unequal Hybrid Energy Efficient Distributed algorithm (RUHEED). In the case of uniform node deployment, the network lifetime of the proposed algorithm is about 6.4% longer than that of the sub-Ring-based Energy-efficient Clustering Routing for WSN (RECR). The proposed algorithm can effectively balance energy consumption under different node deployment models and prolong the network lifetime.
Topology-aware congestion control algorithm in data center network
WANG Renqun, PENG Li
Journal of Computer Applications    2016, 36 (9): 2357-2361.   DOI: 10.11772/j.issn.1001-9081.2016.09.2357
To solve the problem of link congestion in Data Center Networks (DCN), a Topology-Aware Congestion Control (TACC) algorithm was proposed. According to the properties of multi-dimensional orthogonality and single-dimensional full mesh in the generalized hypercube, a topology-aware strategy was put forward to find disjoint routes for distributing flow requests using the max-flow min-cut theorem. Then, the disjoint routes were adjusted adaptively to satisfy the bandwidth requirement. Finally, the residual bandwidth of each selected path was used as its weight to dynamically adjust the flow distribution over the routes, so as to alleviate network congestion, balance the link load and reduce the pressure of recombining data at the destination. The experimental results show that, in comparison with the Link Criticality Routing Algorithm (LCRA), the Multipath Oblivious Routing Algorithm (MORA), Min-Cut Multi-Path (MCMP) and the Congestion-Free Routing Strategy (CFRS), the TACC algorithm performs well in link load balancing and deployment time.
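The route-splitting idea can be sketched with networkx as follows: edge-disjoint routes are found between source and destination, and a demand is split in proportion to each route's residual (bottleneck) bandwidth, with the max-flow value as an upper bound. The tiny topology is a stand-in, not a generalized hypercube, and the proportional split rule is an assumption.

```python
# Hedged sketch of splitting a flow over disjoint routes weighted by residual bandwidth.
import networkx as nx

g = nx.DiGraph()
g.add_weighted_edges_from(
    [("s", "a", 10), ("a", "t", 10), ("s", "b", 5), ("b", "t", 5), ("s", "c", 2), ("c", "t", 2)],
    weight="capacity")

max_flow, _ = nx.maximum_flow(g, "s", "t", capacity="capacity")
routes = list(nx.edge_disjoint_paths(g, "s", "t"))

def bottleneck(path):
    return min(g[u][v]["capacity"] for u, v in zip(path, path[1:]))

total = sum(bottleneck(p) for p in routes)
demand = 8.0
for path in routes:   # split the demand in proportion to each route's residual bandwidth
    share = demand * bottleneck(path) / total
    print(path, "carries", round(share, 2))
print("max-flow bound:", max_flow)
```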
Hybrid multi-hop routing algorithm of effective energy-hole avoidance for wireless sensor networks
YANG Xiaofeng, WANG Rui, PENG Li
Journal of Computer Applications    2015, 35 (7): 1815-1819.   DOI: 10.11772/j.issn.1001-9081.2015.07.1815

In cluster-based routing algorithms for Wireless Sensor Networks (WSN), the "energy hole" phenomenon results from the energy consumption imbalance between sensors. To address this problem, a hybrid multi-hop routing algorithm for effective energy-hole avoidance was put forward on the basis of research into flat and hierarchical routing protocols. Firstly, the concept of a hotspot area was introduced to divide the monitoring area; in the clustering stage, the amount of data outside the hotspot area was reduced by an uneven clustering algorithm that aggregates data within the clusters. Secondly, energy consumption in the hotspot area during the clustering stage was cut down by not clustering there. Finally, in the inter-cluster communication phase, the Particle Swarm Optimization (PSO) algorithm was employed to seek the optimal transmission path that simultaneously minimizes the maximum next-hop distance between two nodes on the routing path and the maximum hop count, thereby minimizing the energy consumption of the whole network. Theoretical analysis and experimental results show that, compared with the Reinforcement-Learning-based Lifetime Optimal routing protocol (RLLO) and the Multi-Layer routing protocol through Fuzzy logic based Clustering mechanism (MLFC), the proposed algorithm has better energy efficiency and energy consumption uniformity, and raises the network lifetime by 20.1% and 40.5% respectively, effectively avoiding the "energy hole".

Frequency offset tracking and estimation algorithm in orthogonal frequency division multiplexing based on improved strong tracking unscented Kalman filter
YANG Zhaoyang, YANG Xiaopeng, LI Teng, YAO Kun, ZHANG Hengyang
Journal of Computer Applications    2014, 34 (8): 2248-2251.   DOI: 10.11772/j.issn.1001-9081.2014.08.2248

To address the large frequency offset caused by the Doppler effect in high-speed mobile environments, a dynamic state space model of Orthogonal Frequency Division Multiplexing (OFDM) was built, and a frequency offset tracking and estimation algorithm for OFDM based on an improved Strong Tracking Unscented Kalman Filter (STUKF) was proposed. By combining strong tracking filter theory with the UKF, a fading factor was introduced into the calculation of the measurement prediction covariance and the cross covariance, so that the frequency offset estimation error covariance was adjusted, the process noise covariance was controlled, and the gain matrix was adjusted in real time. In this way, the ability to track time-varying frequency offsets was enhanced and the estimation accuracy was raised. Simulations were carried out under time-invariant and time-varying frequency offset models. The simulation results show that the proposed algorithm has better tracking and estimation performance than the UKF frequency offset estimation algorithm, gaining about 1 dB in Signal-to-Noise Ratio (SNR) at the same Bit Error Rate (BER).

Graph embedding method integrated multiscale features
LI Zhijie, LI Changhua, YAO Peng, LIU Xin
Journal of Computer Applications    2014, 34 (10): 2891-2894.   DOI: 10.11772/j.issn.1001-9081.2014.10.2891

In the domain of structural pattern recognition, existing graph embedding methods lack versatility and have high computational complexity. A new graph embedding method integrating multiscale features, based on space syntax theory, was proposed to solve this problem. Global, local and detail features were extracted to construct a feature vector depicting the graph through a multiscale histogram. The global features included the vertex number, edge number and intelligibility degree; the local features referred to node topological features, edge domain feature dissimilarity and edge topological feature dissimilarity; the detail features comprised the numerical and symbolic attributes on vertices and edges. In this way, structural pattern recognition was converted into statistical pattern recognition, so that a Support Vector Machine (SVM) could be applied to graph classification. The experimental results show that the proposed graph embedding method achieves higher classification accuracy on different graph datasets. Compared with other graph embedding methods, the proposed method adequately renders the graph's topology, merges non-topological features through the graph's domain properties, and has good universality and low computational complexity.
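A hedged sketch of the embed-then-classify pipeline is given below: a few global statistics plus a coarse degree histogram stand in for the paper's global/local/detail features, and an SVM classifies the resulting vectors; the space-syntax measures themselves are not reproduced.

```python
# Hedged sketch: fixed-length graph feature vectors (stand-in features) + SVM classification.
import networkx as nx
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def embed(graph, bins=4):
    degrees = np.array([d for _, d in graph.degree()], dtype=float)
    hist, _ = np.histogram(degrees, bins=bins, range=(0, 20), density=True)
    global_part = [graph.number_of_nodes(), graph.number_of_edges(),
                   nx.average_clustering(graph)]
    return np.concatenate([global_part, hist])

# Two toy graph classes: random graphs vs. small-world graphs.
graphs = [nx.gnp_random_graph(20, 0.2, seed=i) for i in range(30)] + \
         [nx.watts_strogatz_graph(20, 4, 0.05, seed=i) for i in range(30)]
labels = [0] * 30 + [1] * 30
X = np.array([embed(g) for g in graphs])
print("CV accuracy:", cross_val_score(SVC(), X, labels, cv=5).mean())
```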

Novel blind frequency offset estimation algorithm in orthogonal frequency division multiplexing system based on particle swarm optimization
YANG Zhaoyang YANG Xiaopeng LI Teng YAO Kun NI
Journal of Computer Applications    2014, 34 (10): 2787-2790.   DOI: 10.11772/j.issn.1001-9081.2014.10.2787

To estimate the frequency offset in Orthogonal Frequency Division Multiplexing (OFDM) systems, a novel blind frequency offset estimation algorithm based on Particle Swarm Optimization (PSO) was proposed. Firstly, the mathematical model and cost function were designed according to the principle of minimizing the reconstruction error between the reconstructed signal and the signal actually received. The powerful random, parallel, global search capability of PSO was then used to minimize the cost function and obtain the frequency offset estimate. Two inertia weight strategies for the PSO algorithm, constant coefficient and differential descending, were simulated, and comparisons were made with the minimum output variance and golden section methods. The simulation results show that the proposed algorithm achieves high accuracy, about one order of magnitude better than similar algorithms at the same Signal-to-Noise Ratio (SNR), and that it is not restricted by the modulation type within the frequency offset estimation range (-0.5, 0.5).
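A sketch of the approach on a toy OFDM signal is shown below: the cost is one plausible blind reconstruction error (the distance between the compensated, demodulated symbols and their nearest QPSK decisions, not necessarily the paper's exact cost), and a constant-inertia PSO searches the offset over (-0.5, 0.5).

```python
# Hedged sketch: PSO minimising a decision-directed reconstruction error over the offset.
import numpy as np

rng = np.random.default_rng(0)
N, true_offset = 64, 0.21
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
tx = np.fft.ifft(qpsk) * np.sqrt(N)
n = np.arange(N)
rx = tx * np.exp(2j * np.pi * true_offset * n / N)
rx += 0.02 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

def cost(eps):
    comp = rx * np.exp(-2j * np.pi * eps * n / N)         # offset compensation
    sym = np.fft.fft(comp) / np.sqrt(N)
    decided = (np.sign(sym.real) + 1j * np.sign(sym.imag)) / np.sqrt(2)
    return np.sum(np.abs(sym - decided) ** 2)             # blind reconstruction error

def pso(cost, particles=20, iters=60, w=0.6, c1=1.5, c2=1.5):
    pos = rng.uniform(-0.5, 0.5, particles)
    vel = np.zeros(particles)
    best = pos.copy()
    best_cost = np.array([cost(p) for p in pos])
    g_best = best[best_cost.argmin()]
    for _ in range(iters):
        vel = (w * vel + c1 * rng.random(particles) * (best - pos)
               + c2 * rng.random(particles) * (g_best - pos))
        pos = np.clip(pos + vel, -0.5, 0.5)
        cur = np.array([cost(p) for p in pos])
        improved = cur < best_cost
        best[improved], best_cost[improved] = pos[improved], cur[improved]
        g_best = best[best_cost.argmin()]
    return g_best

print("estimated offset:", round(float(pso(cost)), 3), "true:", true_offset)
```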

Research and design of trusted cryptography module driver based on unified extensible firmware interface
ZHU Hexin, WANG Zhengpeng, LIU Yehui, FANG Shuiping
Journal of Computer Applications    2013, 33 (06): 1646-1649.   DOI: 10.3724/SP.J.1087.2013.01646
To extend the application range of the Trusted Cryptography Module (TCM) and promote safety and trust on terminal machines and cloud platforms, this paper analyzed the status quo and trends of TCM firmware, proposed a TCM firmware driver framework based on the Unified Extensible Firmware Interface (UEFI), and designed the low-level driver interface and core protocol based on this framework. The TCM driver adopts a modular design and layered implementation: the TCM protocol is packaged and registered with the UEFI firmware system, and low-level data sending and receiving as well as protocol encapsulation are completed. The test results of the TCM firmware driver indicate the high accuracy and effectiveness of this design through conformance, functional and stress tests. Besides, industrial practice also illustrates the feasibility of this driver.
Improved fuzzy auto-regressive model for connection rate prediction
SHEN Chen, SUN Yongxiong, HUANG Liping, LIU Lipeng, LI Shuqiu
Journal of Computer Applications    2013, 33 (05): 1222-1229.   DOI: 10.3724/SP.J.1087.2013.01222
To meet the need for performance prediction in communication networks, a connection rate prediction method based on a fuzzy Auto-Regressive (AR) model was proposed and improved, and a fuzzy AR model based on an adaptive fitting degree threshold was studied. The median filtering method was applied to pre-process the data of the fuzzy AR model. On this basis, for applications whose thresholds are uncertain, a fitting degree threshold formula was added to the prediction model to make it adaptive. The simulation results show that the prediction method based on the fuzzy AR model can be used to predict the connection rate with a higher fitting degree.
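The preprocessing-plus-AR part of the pipeline can be sketched as follows: median filtering of the connection-rate series, a least-squares AR(p) fit and a one-step prediction. The fuzzy membership weighting and the adaptive fitting-degree threshold described in the paper are omitted.

```python
# Hedged sketch: median filtering followed by a least-squares AR(p) fit and one-step prediction.
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(1)
t = np.arange(200)
series = 0.9 + 0.05 * np.sin(2 * np.pi * t / 50) + 0.01 * rng.standard_normal(200)
series[50] = 0.2                                   # an outlier removed by the median filter
smooth = medfilt(series, kernel_size=5)

def fit_ar(x, p=4):
    # Rows are lagged windows; solve x[t] ~ sum_k a_k * x[t-k] in the least-squares sense.
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    coeffs, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coeffs

coeffs = fit_ar(smooth)
next_value = smooth[-1:-5:-1] @ coeffs             # predict the next connection rate
print("predicted next connection rate:", round(float(next_value), 4))
```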
Medical image classification based on scale space multi-feature fusion
LI Bo, CAO Peng, LI Wei, ZHAO Dazhe
Journal of Computer Applications    2013, 33 (04): 1108-1111.   DOI: 10.3724/SP.J.1087.2013.01108
In order to describe different kinds of medical images more consistently and reduce scale sensitivity, a classification model based on scale space multi-feature fusion was proposed according to the characteristics of medical images. First, a scale space was built by difference of Gaussians, and then complementary features were extracted, such as gray-scale features, texture features, shape features and features extracted in the frequency domain. In addition, maximum likelihood estimation was used to realize decision-level fusion. The scale space multi-feature fusion classification model was applied to a medical image classification task following the IRMA code. The experimental results show that, compared with traditional methods, the F1 value increased by 5%-20%. The fusion classification model describes medical images more comprehensively, avoids the information loss caused by feature dimension reduction, improves classification accuracy, and has clinical value.
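A minimal sketch of the difference-of-Gaussians scale space is given below, pooling a few per-scale statistics into one vector; the paper's full gray/texture/shape/frequency feature set and the likelihood-based fusion are not reproduced.

```python
# Hedged sketch: difference-of-Gaussians scale space with simple per-scale statistics.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_features(image, sigmas=(1.0, 2.0, 4.0, 8.0)):
    blurred = [gaussian_filter(image, s) for s in sigmas]
    features = []
    for finer, coarser in zip(blurred, blurred[1:]):
        dog = finer - coarser                       # one band of the scale space
        features += [dog.mean(), dog.std(), np.abs(dog).max()]
    return np.array(features)

rng = np.random.default_rng(0)
image = rng.random((128, 128))                      # stand-in for a medical image
print(dog_features(image).round(4))
```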
Imbalanced data learning based on particle swarm optimization
CAO Peng, LI Bo, LI Wei, ZHAO Dazhe
Journal of Computer Applications    2013, 33 (03): 789-792.   DOI: 10.3724/SP.J.1087.2013.00789
In order to improve classification performance on imbalanced data, a new Particle Swarm Optimization (PSO) based method was introduced. It optimizes the re-sampling rate and selects the feature set simultaneously, using an imbalanced data evaluation metric as the objective function of particle swarm optimization, so as to achieve the best data distribution. The proposed method was tested on a large number of UCI datasets and compared with state-of-the-art methods. The experimental results show that the proposed method has substantial advantages over other methods; moreover, they prove that optimizing the re-sampling rate and the feature set simultaneously can effectively improve performance on imbalanced data.
Adaptive random subspace ensemble classification aided by X-means clustering
CAO Peng, LI Bo, LI Wei, ZHAO Dazhe
Journal of Computer Applications    2013, 33 (02): 550-553.   DOI: 10.3724/SP.J.1087.2013.00550
To solve the low accuracy and efficiency issues in large-scale data classification, an adaptive random subspace ensemble classification algorithm aided by X-means clustering was proposed. X-means clustering was adopted to separate the original data space into multiple clusters automatically while maintaining the original data structure; moreover, the adaptive random subspace ensemble classifier enhanced the diversity of the base components and determined the number of base classifiers automatically, so as to improve robustness and accuracy. The experimental results show that the proposed method improves on traditional single and ensemble classifiers in terms of accuracy and robustness on large-scale, high-dimensional datasets, and it also improves the overall efficiency of the algorithm.
Quorum generation algorithm with time complexity of O(n)
WU Peng, LI Meian
Journal of Computer Applications    2013, 33 (02): 323-360.   DOI: 10.3724/SP.J.1087.2013.00323
For the mutual exclusion problem in large-scale fully distributed systems, it is necessary to generate quorums as quickly as possible. Based on the theory of relaxed cyclic difference sets, the definition of the second relaxed cyclic difference set was proposed. Using this new concept, the subtraction steps in previous classical methods can be changed into summation steps, and many of these summation steps can be eliminated through a recurrence relation deduced from them. The time complexity of the algorithm is only O(n), and the size of the symmetric quorums remains close to 2√n.
Face recognition algorithm based on multi-level texture spectrum features and PCA
DANG Xin-peng, LIU Wen-ping
Journal of Computer Applications    2012, 32 (08): 2316-2319.   DOI: 10.3724/SP.J.1087.2012.02316
To improve the recognition rate of the Principal Component Analysis (PCA) algorithm in face recognition, a new algorithm combining image texture spectrum features with PCA was proposed. Firstly, the texture unit operator was used to extract the texture spectrum feature of the face image. Secondly, PCA was used to reduce the dimensionality of the texture spectrum feature. Finally, K-Nearest Neighbor (KNN) classification was used to recognize the face. The ORL and Yale face databases were used to test the proposed algorithm, and the recognition accuracies were 96.5% and 95% respectively, higher than those of PCA and Modular Two-Dimensional PCA (M2DPCA). The experimental results demonstrate the efficiency and accuracy of the proposed algorithm.
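A hedged sketch of the pipeline is shown below using a single-level texture spectrum (He-Wang texture units), PCA and a 1-NN classifier on random stand-in images; the paper's multi-level variant and the ORL/Yale evaluation protocol are not reproduced.

```python
# Hedged sketch: texture spectrum histogram -> PCA -> 1-NN classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def texture_spectrum(img):
    c = img[1:-1, 1:-1]
    units = np.zeros_like(c, dtype=np.int64)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for k, (dr, dc) in enumerate(shifts):
        nb = img[1 + dr:img.shape[0] - 1 + dr, 1 + dc:img.shape[1] - 1 + dc]
        e = np.where(nb > c, 2, np.where(nb == c, 1, 0))       # ternary comparison
        units += e * (3 ** k)
    hist, _ = np.histogram(units, bins=256, range=(0, 3 ** 8))  # coarse spectrum
    return hist / hist.sum()

rng = np.random.default_rng(0)
faces = rng.integers(0, 256, size=(40, 32, 32))     # stand-in for face images
labels = np.repeat(np.arange(10), 4)
X = np.array([texture_spectrum(f) for f in faces])
model = make_pipeline(PCA(n_components=15), KNeighborsClassifier(n_neighbors=1))
model.fit(X[::2], labels[::2])
print("toy accuracy:", model.score(X[1::2], labels[1::2]))
```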
Performance evaluation of space information data processing system based on queuing network
WANG Jian-jiang, QIU Di-shan, PENG Li
Journal of Computer Applications    2012, 32 (03): 870-873.   DOI: 10.3724/SP.J.1087.2012.00870
In order to scientifically evaluate the performance of a Space Information Data Processing System (SIDPS), this paper presented an evaluation method based on queuing networks. The processing patterns of space information data were analyzed, a core index system for performance evaluation was constructed, and a performance evaluation model of SIDPS based on a queuing network with limited waiting room was established. The experimental results confirm the effectiveness of the approach.
Optimal deployment of multiple sink nodes in wireless sensor networks
LIU Qiang, MAO Yu-ming, LENG Su-peng, LI Long-jiang, ZHUANG Yi-qun
Journal of Computer Applications    2011, 31 (09): 2313-2316.   DOI: 10.3724/SP.J.1087.2011.02313
In a large-scale Wireless Sensor Network (WSN), the nodes closer to a single sink node use up their energy more quickly than others because they relay more packets, so the network fails rapidly. In order to prolong the network lifetime, the number of hops from sensor nodes to sink nodes must be reduced, and an efficient way to do this is to deploy multiple sink nodes instead of a single one. It is therefore necessary to consider how many sink nodes should be deployed to minimize network cost and maximize network lifetime. A network lifetime model and a cost model for WSN with multiple sink nodes were proposed, and a new method was presented to determine the optimal number of sink nodes by computing the Ratio of Lifetime to Cost (RLC). The theoretical analysis shows that the optimal number of sink nodes is related to the cost of sensor nodes and sink nodes, the network scale, the number of critical sensor nodes and the transmission power of the sensor nodes. The simulation results confirm the theoretical conclusion.
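The RLC selection itself is simple to illustrate. In the sketch below the lifetime and cost models are crude placeholders (lifetime grows as the expected hop count to the nearest sink shrinks, cost is linear in node prices), not the paper's formulas, and the number of sinks maximizing RLC is picked by exhaustive search.

```python
# Hedged sketch: choose the sink count that maximises lifetime / cost (RLC).
import math

SENSOR_COST, SINK_COST = 5.0, 200.0       # placeholder unit prices
NUM_SENSORS, AREA_SIDE = 400, 200.0       # placeholder network scale (metres)

def lifetime(num_sinks):
    # Fewer expected hops to the nearest sink -> less relayed traffic -> longer lifetime.
    expected_hops = AREA_SIDE / (2.0 * math.sqrt(num_sinks)) / 20.0   # 20 m per hop
    return 1000.0 / max(expected_hops, 1.0)

def cost(num_sinks):
    return NUM_SENSORS * SENSOR_COST + num_sinks * SINK_COST

best = max(range(1, 21), key=lambda k: lifetime(k) / cost(k))
print("optimal number of sinks by RLC:", best,
      "RLC =", round(lifetime(best) / cost(best), 4))
```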
Data mining in P2P networks
Tian-Peng LIU
Journal of Computer Applications   
After analyzing both the operational mechanism of current distributed data mining and the characteristics of P2P technology (decentralized peers and asynchrony), a distributed data mining algorithm was designed in this paper by extending the iterative process of the classical K-means algorithm, implementing the K-means idea in P2P networks. The algorithm exchanges information only between directly connected nodes, and can cluster the local data on each peer from a global view. Finally, simulation experiments show that the algorithm is effective and accurate.
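One synchronous round of such a peer-to-peer K-means can be sketched as follows: each peer assigns only its local data, exchanges cluster sums and counts with directly connected neighbours, and re-estimates centroids from the merged statistics. The ring topology and synchronous rounds below are simplifications of the asynchronous scheme described.

```python
# Hedged sketch: K-means where peers merge statistics only with direct neighbours.
import numpy as np

rng = np.random.default_rng(0)
K, PEERS = 3, 4
local_data = [rng.normal(loc=c, scale=0.5, size=(50, 2)) for c in (0, 3, 6, 9)]
neighbours = {p: [(p - 1) % PEERS, (p + 1) % PEERS] for p in range(PEERS)}  # ring topology
centroids = rng.uniform(0, 9, size=(K, 2))

def local_stats(data, centroids):
    labels = np.argmin(np.linalg.norm(data[:, None] - centroids[None], axis=2), axis=1)
    sums = np.array([data[labels == k].sum(axis=0) for k in range(K)])
    counts = np.array([(labels == k).sum() for k in range(K)])
    return sums, counts

for _ in range(10):                       # synchronous rounds for illustration only
    stats = [local_stats(d, centroids) for d in local_data]
    new = np.zeros_like(centroids)
    for p in range(PEERS):
        s = stats[p][0] + sum(stats[q][0] for q in neighbours[p])
        c = stats[p][1] + sum(stats[q][1] for q in neighbours[p])
        new += np.where(c[:, None] > 0, s / np.maximum(c, 1)[:, None], centroids)
    centroids = new / PEERS               # peers drift toward common centroids
print(np.round(centroids, 2))
```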